
    Portfolio Allocation for Bayesian Optimization

    Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the model's estimate of the objective and the uncertainty at any given point. However, there are several different parameterized acquisition functions in the literature, and it is often unclear which one to use. Instead of using a single acquisition function, we adopt a portfolio of acquisition functions governed by an online multi-armed bandit strategy. We propose several portfolio strategies, the best of which we call GP-Hedge, and show that this method outperforms the best individual acquisition function. We also provide a theoretical bound on the algorithm's performance.
    Comment: This revision contains an updated performance bound and other minor text changes.
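    The Hedge-style selection rule at the heart of a portfolio like GP-Hedge can be sketched as follows. This is a minimal illustration, not the paper's implementation: each acquisition function keeps a cumulative gain, and one is sampled with probability proportional to the exponentiated gains. The reward values in the toy loop are hypothetical stand-ins for the GP posterior mean at each arm's nominated point.

    ```python
    import math
    import random

    def gp_hedge_select(gains, eta=1.0):
        """Sample an acquisition-function index with probability
        proportional to exp(eta * gain) (softmax over cumulative gains)."""
        m = max(gains)  # subtract the max for numerical stability
        weights = [math.exp(eta * (g - m)) for g in gains]
        total = sum(weights)
        probs = [w / total for w in weights]
        r, cum = random.random(), 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                return i
        return len(gains) - 1

    # Toy loop with three acquisition functions. The reward vectors are
    # hypothetical posterior-mean values at each arm's nominated point.
    gains = [0.0, 0.0, 0.0]
    for rewards in ([0.1, 0.5, 0.2], [0.0, 0.6, 0.1]):
        chosen = gp_hedge_select(gains)
        # In a Hedge-style portfolio, every arm's gain is updated with the
        # reward at its own nominee, not only the chosen arm's.
        gains = [g + r for g, r in zip(gains, rewards)]
    ```

    An arm that consistently nominates high-value points accumulates gain and is sampled more often, which is how the portfolio can track the best individual acquisition function over time.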

    Predictive Entropy Search for Efficient Global Optimization of Black-box Functions

    We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES). At each iteration, PES selects the next evaluation point that maximizes the expected information gained with respect to the global maximum. PES codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution. This reformulation allows PES to obtain approximations that are both more accurate and efficient than other alternatives such as Entropy Search (ES). Furthermore, PES can easily perform a fully Bayesian treatment of the model hyperparameters while ES cannot. We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.
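    The quantity PES works with can be illustrated numerically. For a Gaussian predictive distribution, the differential entropy is 0.5·log(2πeσ²), and the PES score at a candidate is the entropy of the current predictive minus the expected entropy after conditioning on sampled maximizer locations. The sketch below is only a toy illustration of that arithmetic; the variance values are made up, and the paper's actual approximations (expectation propagation, maximizer sampling) are not shown.

    ```python
    import math

    def gaussian_entropy(sigma2):
        """Differential entropy of a Gaussian with variance sigma2."""
        return 0.5 * math.log(2 * math.pi * math.e * sigma2)

    def pes_score(prior_var, conditioned_vars):
        """Expected reduction in predictive entropy at a candidate x:
        H[p(y|D,x)] - E_{x*}[H[p(y|D,x,x*)]], with the expectation
        approximated by averaging over sampled maximizer locations x*."""
        expected = sum(gaussian_entropy(v) for v in conditioned_vars) / len(conditioned_vars)
        return gaussian_entropy(prior_var) - expected

    # Hypothetical numbers: conditioning on two sampled maxima shrinks
    # the predictive variance at this candidate from 1.0 to 0.5 and 0.25.
    score = pes_score(1.0, [0.5, 0.25])
    ```

    A candidate whose predictive variance shrinks most under the sampled maximizers carries the most information about the global maximum and receives the highest score.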

    An Entropy Search Portfolio for Bayesian Optimization

    Bayesian optimization is a sample-efficient method for black-box global optimization. However, the performance of a Bayesian optimization method very much depends on its exploration strategy, i.e. the choice of acquisition function, and it is not clear a priori which choice will result in superior performance. While portfolio methods provide an effective, principled way of combining a collection of acquisition functions, they are often based on measures of past performance which can be misleading. To address this issue, we introduce the Entropy Search Portfolio (ESP): a novel approach to portfolio construction which is motivated by information-theoretic considerations. We show that ESP outperforms existing portfolio methods on several real and synthetic problems, including geostatistical datasets and simulated control tasks. We show not only that ESP matches the performance of the best, but unknown, acquisition function, but also that it often surprisingly performs better. Finally, over a wide range of conditions we find that ESP is robust to the inclusion of poor acquisition functions.
    Comment: 10 pages, 5 figures
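    The selection principle behind an information-theoretic portfolio can be sketched in a toy form: each acquisition function nominates a candidate, and the portfolio picks the nominee that would leave the least remaining uncertainty about where the maximizer is. Everything below is illustrative, not the paper's method: the empirical maximizer distribution is represented by hypothetical posterior samples over grid indices.

    ```python
    import math
    from collections import Counter

    def maximizer_entropy(argmax_samples):
        """Shannon entropy of an empirical distribution over which grid
        point is the maximizer, estimated from posterior samples."""
        counts = Counter(argmax_samples)
        n = len(argmax_samples)
        return -sum((c / n) * math.log(c / n) for c in counts.values())

    def esp_choose(nominees, argmax_samples_if_evaluated):
        """Pick the nominee whose (simulated) evaluation leaves the
        lowest remaining entropy over the maximizer's location."""
        best, best_h = None, float("inf")
        for x in nominees:
            h = maximizer_entropy(argmax_samples_if_evaluated[x])
            if h < best_h:
                best, best_h = x, h
        return best

    # Hypothetical scenario: evaluating nominee "a" would concentrate the
    # maximizer samples on one grid point; nominee "b" would leave them spread.
    samples = {"a": [0, 0, 0, 1], "b": [0, 1, 2, 3]}
    choice = esp_choose(["a", "b"], samples)
    ```

    Because the score depends on how much each candidate would narrow the maximizer distribution, rather than on the past rewards of each acquisition function, a nominee from a historically weak arm can still be selected when it is informative.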

    Decoding Guilty Minds

    A central tenet of Anglo-American penal law is that in order for an actor to be found criminally liable, a proscribed act must be accompanied by a guilty mind. While it is easy to understand the importance of this principle in theory, in practice it requires jurors and judges to decide what a person was thinking months or years earlier at the time of the alleged offense, either about the results of his conduct or about some elemental fact (such as whether the briefcase he is carrying contains drugs). Despite the central importance of this task in the administration of criminal justice, there has been very little research investigating how people go about making these decisions, and how these decisions relate to their intuitions about culpability. Understanding the cognitive mechanisms that govern this task is important for the law, not only to explore the possibility of systemic biases and errors in attributions of culpability but also to probe the intuitions that underlie them. In a set of six exploratory studies reported here, we examine the way in which individuals infer others’ legally relevant mental states about elemental facts, using the framework established over fifty years ago by the Model Penal Code (“MPC”). The widely adopted MPC framework delineates and defines the four now-familiar culpable mental states: purpose, knowledge, recklessness, and negligence. Our studies reveal that with little to no training, jury-eligible Americans can apply the MPC framework in a manner that is largely congruent with the basic assumptions of the MPC’s mental state hierarchy. However, our results also indicate that subjects’ intuitions about the level of culpability warranting criminal punishment diverge significantly from prevailing legal practice; subjects tend to regard recklessness as a sufficient basis for punishment under circumstances where legislatures and courts tend to require knowledge.
